Modeling Category Learning with Exemplars and Prior Knowledge

Author

  • Harlan D. Harris
Abstract

An open question in category learning research is how prior knowledge affects the process of learning new concepts. Rehder and Murphy's (2003) Knowledge Resonance (KRES) model of concept learning uses an interactive neural network to account for many observed effects related to prior knowledge, but it cannot account for the learning of nonlinearly separable concepts. In this work, we extend the KRES model by adding exemplar nodes. The new model accounts for the fact that linearly separable concepts are not necessarily easier to learn than nonlinearly separable concepts (Medin & Schwanenflugel, 1981) and, more importantly, for a notable interaction between the presence of useful prior knowledge and linear separability (Wattenmaker, Dewey, Murphy, & Medin, 1986). Two architectural variants of the model were tested, and the dependence of good results on a particular architecture indicates how formal modeling can uncover facts about how the prior knowledge that influences concept learning is used and represented.

Most current theories of category learning address how new concepts are learned on the basis of empirical regularities in the environment. Considerable progress has been made in determining how learners encode empirical information about how features, and sets of features, covary with category labels. However, these models fail to account for the important role of prior knowledge. Other models of category learning address the effects of prior knowledge, but they in turn fail to account for the wide range of empirical effects that have been observed. The work reported here aims to integrate these two veins of concept learning research.

Prior knowledge is known to have a number of effects on concept learning. When knowledge is related to a learning task, learning is often faster (Murphy & Allopenna, 1994; Wattenmaker et al., 1986). In addition, when new concepts are related to prior knowledge, structural effects that have been found in empirical concept learning studies may be absent or even reversed (Pazzani, 1991; Wattenmaker et al., 1986).

In this research we introduce a new category learning model whose goal is to account for the effects of both prior knowledge and empirical regularities on concept learning. We address the question of which kinds of representations (exemplars? prototypes? rules?) are involved in learning tasks, and how those representations become related to one another and to representations of prior knowledge as a result of experience. By fitting variants of our new model to two human learning data sets, we will show that only a very particular pattern of connectivity among representations is warranted.

We pursue this question by extending an existing model of category learning, the Knowledge Resonance (KRES) model introduced by Rehder and Murphy (2003). KRES is a connectionist model of knowledge effects in concept learning: it uses interactive activation among representations of stimulus features, category labels, and prior knowledge, and it uses a supervised learning algorithm, Contrastive Hebbian Learning (CHL; O'Reilly, 1996), to learn symmetric weights between those representations. KRES accounts for effects of prior knowledge on learning rate, generalization patterns, and reaction time.
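To make the learning rule concrete, here is a minimal sketch of a Contrastive Hebbian Learning update for a toy two-pool network of feature and category-label nodes. It is only an illustration of the rule, not the published KRES implementation: the pool sizes, learning rate, and training items are invented for the example, and the full model's recurrent settling over prior-knowledge nodes is omitted.

# Minimal Contrastive Hebbian Learning (CHL) sketch on a toy two-pool network.
# Illustrative only: pool sizes, learning rate, and stimuli are invented here,
# not taken from the published KRES simulations.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_labels = 4, 2
W = rng.normal(0.0, 0.1, (n_features, n_labels))    # symmetric feature-label weights

def minus_phase(x, W):
    # Free ("minus") phase: feature nodes are clamped to the stimulus and the
    # category nodes settle; with only two pools this reduces to a single pass.
    return 1.0 / (1.0 + np.exp(-(x @ W)))

def chl_update(W, x, target, lr=0.2):
    # CHL: the weight change is the difference between Hebbian co-activation
    # in the clamped ("plus") phase and in the free ("minus") phase.
    y_minus = minus_phase(x, W)
    y_plus = target                     # plus phase: category nodes clamped to the label
    return W + lr * (np.outer(x, y_plus) - np.outer(x, y_minus))

# Toy linearly separable problem: feature 1 signals category A, feature 4 category B.
stimuli = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]], float)
labels  = np.array([[1, 0],       [1, 0],       [0, 1],       [0, 1]],       float)

for _ in range(200):
    for x, t in zip(stimuli, labels):
        W = chl_update(W, x, t)

print(np.round(minus_phase(stimuli[0], W), 2))       # activation should favour category A

With only two pools the free phase settles in a single pass, so the update reduces to a delta-rule-like correction; in a recurrent KRES-style network with prior-knowledge nodes, the plus and minus phases would each involve iterative settling before the same contrastive update is applied.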
KRES builds on the Baywatch model introduced by Heit and Bott (2000). Baywatch is a standard feedforward connectionist network supplemented with prior concept nodes that can be used as a basis for categorization. Baywatch accounts for knowledge effects on responses to novel but knowledge-related features, as well as for the effects of prior knowledge that is incongruent with the empirical stimuli (Heit, Briggs, & Bott, 2004). KRES goes beyond Baywatch in also being able to represent prior knowledge that relates stimulus features to (a) one another and (b) concept nodes, allowing the model to account for "top-down" effects in learning.

Nevertheless, the published versions of both Baywatch and KRES share a significant restriction: they are limited to learning linearly separable concepts. Their architectures are similar to a classical prototype model, in which the weights compute a monotonic function of the input representation. However, people are able to learn nonlinearly separable concepts, and often find such concepts as easy to learn as linearly separable ones (Medin & Schwanenflugel, 1981). In part as a result of this, exemplar models of classification (Medin & Schaffer, 1978; Nosofsky, 1986; Kruschke, 1992) have become prominent, as they naturally account for the learning of nonlinearly separable concepts. Some recent work has challenged exemplar models (Smith & Minda, 1998; but see Nosofsky & Zaki, 2002; Rehder & Hoffman, 2005), and some recent models of classification have proposed alternatives, such as using clusters instead of exhaustive sets of exemplars (Love, Medin, & Gureckis, 2004). However, our goal is to build a new model with the ability to learn nonlinearly separable concepts, and exemplars are a reasonable starting place, with much empirical evidence to support them (a toy illustration of this point is sketched below).

Figure 1: Architecture of the new KRES network. I = input nodes; O = output nodes; P = prior knowledge nodes; E = exemplar nodes. Connections depicted with solid lines are fixed weights; those with fine dashed lines are set-once exemplar weights; those with dashed lines are CHL-learnable weights. The KRES/EFK model includes all three of the links labeled E, F, and K; other models include subsets.

We first describe how we incorporated exemplars into KRES, noting several possible architectural variations. We then give the results of our work simulating experiments by Medin and Schwanenflugel (1981) and Wattenmaker et al. (1986), focusing on the architectural variations and their implications for theories of concept learning. We conclude with a discussion of the results and their implications for a comprehensive theory of category learning with and without prior knowledge.
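As a concrete illustration of the separability point above, the sketch below applies a multiplicative exemplar-similarity rule, in the spirit of the Medin and Schaffer (1978) context model, to an XOR-like category structure. The mismatch parameter and the stimuli are invented for the illustration and are not the category structures used in the simulations. No weighted sum of the two features can separate these categories, but summed similarity to stored exemplars classifies every item correctly.

# Toy exemplar-similarity sketch (in the spirit of the Medin & Schaffer, 1978,
# context model). The mismatch parameter and the XOR-like category structure
# are illustrative assumptions, not stimuli from the simulated experiments.
from itertools import product

S_MISMATCH = 0.3                      # similarity contributed by each mismatching feature

# Nonlinearly separable structure: membership is the XOR of two binary features,
# so no single weighted sum of the features can separate the categories.
category_A = [(1, 1), (0, 0)]
category_B = [(1, 0), (0, 1)]

def similarity(probe, exemplar):
    # Multiplicative, feature-by-feature similarity.
    sim = 1.0
    for p, e in zip(probe, exemplar):
        sim *= 1.0 if p == e else S_MISMATCH
    return sim

def classify(probe):
    # Summed similarity to the stored exemplars of each category decides the response.
    evidence_A = sum(similarity(probe, ex) for ex in category_A)
    evidence_B = sum(similarity(probe, ex) for ex in category_B)
    return "A" if evidence_A > evidence_B else "B"

for probe in product((0, 1), repeat=2):
    print(probe, "->", classify(probe))   # (0,0)->A, (0,1)->B, (1,0)->B, (1,1)->A

Exemplar nodes in the extended model are intended to play an analogous role: they respond to specific feature combinations, giving the CHL-learnable weights a nonlinear basis to combine with feature and prior-knowledge representations.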

Publication date: 2006